Through a series of in-depth ablations, we discover a robust self-supervised pre-training strategy tailored to molecular representations for generative masked language models. Using this strategy, we train BARTSmiles, a BART-like model trained with an order of magnitude more compute than previous self-supervised molecular representations. In-depth evaluations show that BARTSmiles consistently outperforms other self-supervised representations across classification, regression, and generation tasks, setting a new state of the art on 11 tasks. We then quantitatively show that, when applied to the molecular domain, the BART objective learns representations that implicitly encode our downstream tasks of interest. For example, by selecting seven neurons from a frozen BARTSmiles, we can obtain a model whose performance is within two percentage points of the fully fine-tuned model on the ClinTox task. Lastly, we show that standard attribution interpretability methods, when applied to BARTSmiles, highlight substructures that chemists use to explain specific properties of molecules. The code and the pretrained model are publicly available.
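The "seven neurons" result is a probing claim: a handful of coordinates of the frozen representation already carry most of the task signal. Below is a minimal sketch of such a probe using scikit-learn feature selection and a linear classifier; the placeholder activation matrix, the mutual-information selection criterion, and the ClinTox-style binary labels are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch: select a few neurons from frozen encoder activations
# and fit a linear probe on them. Extraction of the activations from
# BARTSmiles is assumed to have happened already (X holds pooled,
# per-molecule activations; y holds binary labels, e.g. ClinTox toxicity).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1024))   # placeholder: frozen BARTSmiles activations
y = rng.integers(0, 2, size=500)   # placeholder: binary task labels

# Keep only 7 neurons, then train a linear probe on those coordinates.
probe = make_pipeline(
    SelectKBest(mutual_info_classif, k=7),
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(probe, X, y, cv=5, scoring="roc_auc").mean())
```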
Collecting labeled data for many important tasks in chemistry is time-consuming and requires expensive experiments. In recent years, machine learning has been used to learn rich representations of molecules from large-scale unlabeled molecular datasets and to transfer that knowledge to more challenging tasks with limited data. Variational autoencoders are one of the tools that have been proposed to perform such transfer for chemical property prediction and molecular generation tasks. In this work, we propose a simple method to improve the chemical property prediction performance of machine learning models by incorporating additional information about correlated molecular descriptors into the representations learned by a variational autoencoder. We validate the approach on three property prediction tasks. We explore the impact of the number of incorporated descriptors, the correlation between the descriptors and the target properties, the sizes of the datasets, and other factors. Finally, we show the relationship between the performance of property prediction models and the distance, in the representation space, between the property prediction dataset and the larger unlabeled dataset.
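One plausible way to fold descriptor information into a VAE's latent space is to attach an auxiliary head that regresses a few descriptors (e.g. molecular weight, LogP, TPSA) from the latent code, adding a descriptor term to the usual reconstruction and KL losses. The sketch below illustrates this idea in PyTorch; the architecture, descriptor set, and loss weights are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch: a VAE whose latent code is also trained to predict
# a few molecular descriptors, so descriptor information is encoded in the
# learned representation. Inputs x are placeholder molecule feature vectors
# (e.g. fingerprints); d are the precomputed target descriptors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DescriptorVAE(nn.Module):
    def __init__(self, input_dim=2048, latent_dim=64, n_descriptors=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, input_dim))
        # Auxiliary head: predict descriptors (e.g. MolWt, LogP, TPSA) from z.
        self.desc_head = nn.Linear(latent_dim, n_descriptors)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), self.desc_head(z), mu, logvar

def loss_fn(x, x_hat, d, d_hat, mu, logvar, beta=1.0, gamma=1.0):
    recon = F.mse_loss(x_hat, x)                                  # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()) # KL divergence
    desc = F.mse_loss(d_hat, d)                                   # descriptor term
    return recon + beta * kl + gamma * desc
```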